24 research outputs found

    Bidirectional Conditional Generative Adversarial Networks

    Full text link
    Conditional Generative Adversarial Networks (cGANs) are generative models that can produce data samples (x) conditioned on both latent variables (z) and known auxiliary information (c). We propose the Bidirectional cGAN (BiCoGAN), which effectively disentangles z and c in the generation process and provides an encoder that learns inverse mappings from x to both z and c, trained jointly with the generator and the discriminator. We present crucial techniques for training BiCoGANs, which involve an extrinsic factor loss along with an associated dynamically-tuned importance weight. Compared to other encoder-based cGANs, BiCoGANs encode c more accurately, and utilize z and c more effectively and in a more disentangled way to generate samples. Comment: To appear in Proceedings of ACCV 201
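A minimal sketch of the encoder objective the abstract describes: an adversarial term plus an extrinsic factor loss on the recovered auxiliary information c, scaled by a dynamically tuned importance weight. Function names and the cross-entropy form are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def extrinsic_factor_loss(c_hat, c):
    # Cross-entropy between the encoder's predicted auxiliary labels
    # c_hat and the true one-hot labels c (one plausible choice of loss).
    eps = 1e-12
    return -np.mean(np.sum(c * np.log(c_hat + eps), axis=1))

def bicogan_encoder_loss(adv_loss, c_hat, c, gamma):
    # Total encoder objective: adversarial term plus the extrinsic
    # factor loss weighted by gamma, which the paper tunes dynamically
    # over the course of training.
    return adv_loss + gamma * extrinsic_factor_loss(c_hat, c)
```

When the encoder recovers c perfectly, the extrinsic term vanishes and only the adversarial term remains.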

    On d-Holomorphic Connections

    Full text link
    We develop the theory of d-holomorphic connections on d-holomorphic vector bundles over a Klein surface by constructing the analogous Atiyah exact sequence for d-holomorphic bundles. We also give a criterion for the existence of a d-holomorphic connection in a d-holomorphic bundle over a Klein surface, in the spirit of the Atiyah-Weil criterion for holomorphic connections.
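For context, the classical construction the abstract generalizes: for a holomorphic vector bundle E over a complex manifold X, the Atiyah bundle At(E) sits in a short exact sequence (written here in standard notation; the d-holomorphic analogue over a Klein surface is the paper's contribution):

```latex
0 \longrightarrow \operatorname{End}(E) \longrightarrow \operatorname{At}(E) \longrightarrow T_X \longrightarrow 0
```

A holomorphic connection on E is precisely a holomorphic splitting of this sequence, which is the viewpoint behind Atiyah-Weil-type existence criteria.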

    CapsuleGAN: Generative Adversarial Capsule Network

    Full text link
    We present Generative Adversarial Capsule Network (CapsuleGAN), a framework that uses capsule networks (CapsNets) instead of the standard convolutional neural networks (CNNs) as discriminators within the generative adversarial network (GAN) setting, while modeling image data. We provide guidelines for designing CapsNet discriminators and the updated GAN objective function, which incorporates the CapsNet margin loss, for training CapsuleGAN models. We show that CapsuleGAN outperforms convolutional GANs at modeling the image data distribution on the MNIST and CIFAR-10 datasets, evaluated on the generative adversarial metric and at semi-supervised image classification. Comment: To appear in Proceedings of ECCV Workshop on Brain Driven Computer Vision (BDCV) 201
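The margin loss the abstract refers to is the standard CapsNet margin loss (Sabour et al.), which CapsuleGAN substitutes for the usual binary cross-entropy in the GAN objective. A sketch, with the usual hyperparameter values as assumptions:

```python
import numpy as np

def margin_loss(v_norms, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    # CapsNet margin loss: v_norms are capsule output lengths in [0, 1],
    # targets are one-hot class indicators. Present classes are pushed
    # above m_pos, absent classes below m_neg (down-weighted by lam).
    pos = targets * np.maximum(0.0, m_pos - v_norms) ** 2
    neg = lam * (1 - targets) * np.maximum(0.0, v_norms - m_neg) ** 2
    return np.sum(pos + neg, axis=-1).mean()
```

In the GAN setting the two "classes" are real and generated, so the discriminator is trained to drive the corresponding capsule lengths toward the two margins.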

    Performance Evaluation of Fine-tuned Faster R-CNN on specific MS COCO Objects

    Get PDF
    Fine-tuning a model is often required to cater to users’ explicit requirements, but the question remains whether the resulting model is accurate enough for a given application. This paper presents the metrics used for performance evaluation of a Convolutional Neural Network (CNN) model. The evaluation is based on the training process, which yields an intermediate model after every 1000 iterations. Since 1000-iteration steps are too fine-grained over the full range of 490k iterations, the intermediate models are grouped into bins of 100k iterations each, and the recorded metrics are compared across groups to evaluate the model in terms of accuracy. The training used a set of specific categories chosen from the Microsoft Common Objects in Context (MS COCO) dataset, while allowing users to test the model’s accuracy on their own external images. Our trained model ensured that all objects present in the image are detected, illustrating the effect of precision.
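The checkpoint-grouping step described above can be sketched as follows (the function name and dict-of-lists layout are illustrative assumptions):

```python
def bin_checkpoints(iterations, bin_size=100_000):
    # Group checkpoint iteration counts (one saved every 1000 iterations)
    # into coarser bins so metrics can be compared per 100k-iteration group.
    bins = {}
    for it in iterations:
        bins.setdefault(it // bin_size, []).append(it)
    return bins
```

Each bin then contributes one aggregated set of metrics, rather than comparing all 490 individual checkpoints.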

    Invariant Representations through Adversarial Forgetting

    Full text link
    We propose a novel approach to achieving invariance for deep neural networks by inducing amnesia to unwanted factors of data through a new adversarial forgetting mechanism. We show that the forgetting mechanism serves as an information bottleneck, which is manipulated by the adversarial training to learn invariance to unwanted factors. Empirical results show that the proposed framework achieves state-of-the-art performance at learning invariance in both nuisance and bias settings on a diverse collection of datasets and tasks. Comment: To appear in Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI-20
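One way to picture the forgetting bottleneck: a learned mask in (0, 1) multiplies the representation elementwise, and adversarial training (not shown here) pressures the gate to zero out dimensions carrying the unwanted factor. This is a minimal illustrative sketch; the names and exact gating form are assumptions, not the paper's architecture.

```python
import numpy as np

def forgetting_gate(z, mask_logits):
    # Elementwise forgetting gate: sigmoid keeps the mask in (0, 1), so
    # dimensions with very negative logits are effectively "forgotten",
    # acting as an information bottleneck on the representation z.
    mask = 1.0 / (1.0 + np.exp(-mask_logits))
    return mask * z, mask
```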